This study introduces RAIN, a technique for aligning large pre-trained language models (LLMs), such as GPT-3, with human preferences. Research shows that with RAIN, LLMs can improve their own outputs to better meet human needs without additional data or fine-tuning. The method applies broadly to language generation tasks and requires no auxiliary models, extra data storage, labeled data, or training. Through self-evaluation during inference, RAIN reduces the success rate of adversarial attacks and enables the model to generate more coherent and safer responses.
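As a rough illustration of the self-evaluation idea, the sketch below has a model propose a candidate response, score it with its own judgment, and regenerate ("rewind") when the score falls below a threshold. The function names (`generate`, `self_evaluate`), the threshold, and the resampling loop are simplifying assumptions for illustration, not the exact RAIN search procedure described in the paper.

```python
import random

def rain_style_generate(prompt, generate, self_evaluate,
                        threshold=0.5, max_rewinds=4):
    """Sketch of self-evaluation guided decoding (not the exact RAIN algorithm).

    generate(prompt) -> candidate continuation (string)
    self_evaluate(prompt, candidate) -> harmlessness score in [0, 1],
        assumed to come from querying the same frozen model with an
        evaluation prompt (no extra model, no training).
    """
    best_candidate, best_score = None, float("-inf")
    for _ in range(max_rewinds + 1):
        candidate = generate(prompt)              # forward generation step
        score = self_evaluate(prompt, candidate)  # model judges its own output
        if score > best_score:
            best_candidate, best_score = candidate, score
        if score >= threshold:                    # judged acceptable: keep it
            return candidate
        # otherwise "rewind": discard the candidate and resample
    return best_candidate                         # fall back to the best attempt


if __name__ == "__main__":
    # Toy stand-ins so the sketch runs without a real LLM.
    replies = ["Sure, here is how to do that safely...", "I can't help with that."]
    toy_generate = lambda p: random.choice(replies)
    toy_evaluate = lambda p, c: 0.9 if ("safely" in c or "can't" in c) else 0.1
    print(rain_style_generate("How do I secure my account?", toy_generate, toy_evaluate))
```

Because both generation and evaluation use the same frozen model, this kind of loop needs no labeled data, no reward model, and no parameter updates, which is the property the paragraph above emphasizes.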